docs(rfd): Custom LLM endpoints #648
title: "Configurable LLM Providers"
Elevator pitch
Add the ability for clients to discover and configure agent LLM providers (identified by `id`) via dedicated provider methods: `providers/list`, `providers/set`, and `providers/remove`.

This allows clients to route LLM requests through their own infrastructure (proxies, gateways, or self-hosted models) without agents needing to know about this configuration in advance.
Status quo
ACP does not currently define a standard method for configuring LLM providers.
In practice, provider configuration is usually done via environment variables or agent-specific config files. That creates several problems: the configuration is invisible to clients, cannot be changed at runtime, and differs from agent to agent.

This particularly affects clients that need to route LLM traffic through their own infrastructure, such as corporate proxies, gateways, or self-hosted models.
Shiny future
Clients will be able to:

- Detect support via `agentCapabilities.providers`.
- Call `providers/list` to discover available providers, their current routing targets, and supported protocol types.
- Call `providers/set` to apply new (required) configuration for a specific provider id.
- Call `providers/remove` when a provider configuration should be removed.

Implementation details and plan

Intended flow

```mermaid
sequenceDiagram
    participant Client
    participant Agent
    Client->>Agent: initialize
    Agent-->>Client: initialize response (agentCapabilities.providers = true)
    Client->>Agent: providers/list
    Agent-->>Client: providers/list response
    Client->>Agent: providers/set (id = "main")
    Agent-->>Client: providers/set response
    Client->>Agent: providers/remove (optional)
    Agent-->>Client: providers/remove response
    Client->>Agent: session/new
```

Capability advertisement
The agent advertises support with a single boolean capability:
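Mirroring the `initialize` response in the example exchange below, the advertisement presumably looks like this fragment (the exact schema snippet is an assumption):

```json
{
  "agentCapabilities": {
    "providers": true
  }
}
```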
If `providers` is absent or `false`, clients must treat provider methods as unsupported.

Types
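The wire shapes can be inferred from the example exchange below; a rough sketch in Python `TypedDict` form (field optionality and anything beyond what the examples show is an assumption):

```python
from typing import TypedDict


class ProviderInfo(TypedDict):
    """One entry in a providers/list result. Headers are never included."""
    id: str
    apiType: str          # protocol type currently in use, e.g. "anthropic"
    baseUrl: str
    supported: list[str]  # protocol types this provider id can accept


class ProvidersSetParams(TypedDict):
    """Params for providers/set: the full new configuration for one id."""
    id: str
    apiType: str
    baseUrl: str
    headers: dict[str, str]


# A record matching the providers/list example in this document.
main: ProviderInfo = {
    "id": "main",
    "apiType": "anthropic",
    "baseUrl": "http://localhost/anthropic",
    "supported": ["bedrock", "vertex", "azure", "anthropic"],
}
```

Note the deliberate asymmetry: `headers` appears only in the `set` direction, never in `list` results (see the FAQ on secrets below).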
`providers/list`

`providers/set`

`providers/set` updates the full configuration for one provider id.

`providers/remove`

Example exchange
`initialize` Response:

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "result": {
    "protocolVersion": 1,
    "agentInfo": { "name": "MyAgent", "version": "2.0.0" },
    "agentCapabilities": {
      "providers": true,
      "sessionCapabilities": {}
    }
  }
}
```

`providers/list` Request:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "providers/list", "params": {} }
```

`providers/list` Response:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "providers": [
      {
        "id": "main",
        "apiType": "anthropic",
        "baseUrl": "http://localhost/anthropic",
        "supported": ["bedrock", "vertex", "azure", "anthropic"]
      },
      {
        "id": "openai",
        "apiType": "openai",
        "baseUrl": "http://localhost/openai",
        "supported": ["openai"]
      }
    ]
  }
}
```

`providers/set` Request:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "providers/set",
  "params": {
    "id": "main",
    "apiType": "anthropic",
    "baseUrl": "https://llm-gateway.corp.example.com/anthropic/v1",
    "headers": { "X-Request-Source": "my-ide" }
  }
}
```

`providers/set` Response:

```json
{ "jsonrpc": "2.0", "id": 2, "result": {} }
```

`providers/remove` Request:

```json
{ "jsonrpc": "2.0", "id": 3, "method": "providers/remove", "params": { "id": "openai" } }
```

`providers/remove` Response:

```json
{ "jsonrpc": "2.0", "id": 3, "result": {} }
```

Behavior
- The agent advertises `agentCapabilities.providers: true` in `initialize`. Clients SHOULD only call `providers/*` when this capability is present and `true`.
- Provider configuration happens after `initialize`. Clients SHOULD configure providers before creating or loading sessions. Agents MAY choose not to apply changes to already running sessions, but SHOULD apply them to sessions created or loaded after the change.
- `providers/list` returns configurable providers, their current routing (`apiType`, `baseUrl`), and supported protocol types.
- `providers/set` replaces the full configuration for the target `id` (`apiType`, `baseUrl`, full `headers`). If `id` is unknown, `apiType` is unsupported for that provider, or params are malformed, agents SHOULD return `invalid_params`.
- `providers/remove` removes runtime configuration for the target `id`. Removing an unknown `id` SHOULD be treated as success (idempotent behavior).

Frequently asked questions
Why not a single `providers/update` method for full list replacement?

A full-list update means the client must send complete configuration (including `headers`) for all providers every time. If the client wants to change only one provider, it may not know the headers for the others; in that case it cannot safely build a correct full-list payload.

Also, `providers/list` does not return headers, so the client cannot simply "take what the agent returned" and send it back with one edit. Per-provider methods (`set` and `remove`) avoid this problem and keep updates explicit.

Why doesn't `providers/list` return headers?

Header values may contain secrets and should not be echoed by the agent. `providers/list` is intentionally limited to non-secret routing information (`apiType`, `baseUrl`).

Why is this separate from `initialize` params?

Clients need capability discovery first, then provider discovery, then configuration. A dedicated method family keeps initialization focused on negotiation and leaves provider mutation to explicit steps.
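That ordering (capability discovery, then provider discovery, then configuration) can be sketched client-side. This is illustrative only: `send` is a hypothetical transport callable and the gateway URL scheme is invented; the method names and the capability check follow this RFD.

```python
def route_via_gateway(send, gateway_base_url, extra_headers):
    """Point every configurable provider at a gateway, in the order the
    RFD prescribes: initialize, capability check, providers/list, providers/set.

    `send(method, params)` is a hypothetical helper that performs one
    JSON-RPC request and returns its `result`.
    """
    init = send("initialize", {"protocolVersion": 1})
    # Absent or false means providers/* must be treated as unsupported.
    if init.get("agentCapabilities", {}).get("providers") is not True:
        return False
    for provider in send("providers/list", {})["providers"]:
        send("providers/set", {
            "id": provider["id"],
            "apiType": provider["apiType"],  # keep the current protocol type
            "baseUrl": f"{gateway_base_url}/{provider['apiType']}",
            "headers": extra_headers,        # full replacement, per the RFD
        })
    return True
```

A client would run this after `initialize` and before `session/new`, since agents are only required to apply changes to sessions created or loaded afterwards.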
Revision history
- `providers/list`, `providers/set`, `providers/remove` method family
- `LlmProtocol` as an open string type with well-known values; resolve open questions on identifier standardization and model availability
- `setLlmEndpoints` method with capability advertisement